
    Optimal capacity decisions of airlines under supply-demand equilibrium

    In the last three decades, airlines across the globe have experienced significant disruptions and milestones such as economic recessions, deregulation, and jet fuel price fluctuations, leading to many consolidations and even bankruptcies. Airlines have only a few options for responding to these disruptions and fluctuations. Capacity planning is one of the key tools airlines use to manage air traffic demand and their operating costs. Carriers may alter the number of flights, use different types of aircraft, reconfigure the seating in the aircraft, or increase the load factor to maintain their market share and profitability, which can occasionally lead to passenger dissatisfaction. 'Capacity planning' is defined in this research as the combination of flight frequency and aircraft size that an airline chooses to manage traffic demand on a given origin-destination route. It affects the airline's service quality and operating costs, in turn influencing its market share and profitability. Capacity planning has become more important for airlines as the relative significance of traditional tools such as airfare management and hedging contracts has diminished. However, capacity planning is a difficult decision-making task: airlines must consider many factors on both sides of the supply-demand equilibrium of the flight market, as well as constraints such as access to specific aircraft or airports and flight regulations. Any change in capacity triggers a sophisticated set of interrelated changes in passenger demand, flight frequency, aircraft size, airfare, and flight delay, ultimately shifting the equilibrium. This reflects economies of density: given no congestion, higher density, in the form of higher passenger demand, leads to more plane-miles through either more flights or larger aircraft.
In fact, with no capacity constraints there is an ongoing loop: higher density on the demand side begets more plane-miles on the supply side of the flight equilibrium. This picture is no longer valid, however, once a capacity constraint is added. The capacity constraint introduces a new player, flight delay, into the equilibrium: higher density leads to more flight delays because of the constraint, and flight delays impose extra costs on airlines, diminishing the economies of density. Airlines therefore need to account for all of these interrelated interactions to make efficient capacity plans on their operating networks. This thesis develops an optimisation model to assist airlines in making optimal capacity decisions for the individual routes of a given market, such as a specific airport or network, to maximise potential passenger demand under the flight supply-demand equilibrium. Three key research questions are identified: What are the key determinants of airlines' capacity decisions under the supply-demand equilibrium of the flight market? How does an airline's capacity decision influence flight delays? How can airline capacity decisions be optimised for the individual routes of a given market to maximise total potential flight demand with respect to the market's capacity constraints? The research also answers related questions about the interactions among the key players of the supply-demand equilibrium. To answer these questions, the research proceeds in three steps. In the first step, the key drivers of capacity planning and demand modelling are statistically identified on both sides of the supply-demand equilibrium by applying the two-stage least squares (2SLS) technique to time-series cross-sectional data for 21 major routes of the Australian domestic market.
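The first estimation step can be illustrated with a minimal two-stage least squares sketch. This is an illustrative NumPy implementation, not the thesis's actual specification; in the thesis's setting the endogenous regressor might be, say, flight frequency in a passenger-demand equation, instrumented by cost-side variables, but the variable names and simulated data below are hypothetical:

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z, x_exog):
    """Minimal 2SLS for one endogenous regressor.

    Stage 1: project the endogenous regressor on the instrument(s)
             and the exogenous controls.
    Stage 2: OLS of y on the fitted regressor plus the same controls.
    """
    ones = np.ones((len(y), 1))
    w = np.hstack([z, x_exog, ones])                       # instrument set
    x_hat = w @ np.linalg.lstsq(w, x_endog, rcond=None)[0] # stage-1 fit
    x2 = np.hstack([x_hat, x_exog, ones])
    beta = np.linalg.lstsq(x2, y, rcond=None)[0]           # stage-2 OLS
    return beta                                            # [endog, exog, const]
```

Because the endogenous regressor is replaced by its instrument-predicted values, the second-stage estimate is purged of the simultaneity bias that plain OLS would suffer.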
In the second step, the impact of changes in the elements of capacity decisions on flight delay is investigated using the Hausman-Taylor regression technique on the Australian domestic data. By connecting the findings of steps 1 and 2, a research framework is created as the basis of the optimisation algorithm in the final step. The model incorporates a series of exogenous and endogenous factors under the supply-demand equilibrium. To address simultaneity among the variables, a system of four non-linear equations, for flight demand, flight frequency, aircraft size, and flight delay, is developed and estimated with two simultaneous-equation techniques: three-stage least squares (3SLS) and the maximum likelihood estimator (MLE). Data for seven Australian domestic routes linking Melbourne to other major cities were used as the case study to estimate the model's coefficients. Finally, non-linear optimisation was applied to the 3SLS and MLE estimates separately to find the optimal capacity plan for the given routes. All proposed models were verified and tested at each step. As its key contribution, this thesis proposes an optimisation model based on a system of non-linear equations of the flight supply-demand equilibrium that maximises the potential flight demand of a given market subject to the market's capacity constraints. The model builds on the theory of economies of density and uses time-series cross-sectional flight-market data to empirically estimate the coefficients of the passenger demand equation, which serves as the objective function. Compared with other capacity-planning models, which generally contain a relatively short list of micro-level factors, the proposed model contains all required macro- and micro-level factors.
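The final optimisation step can be sketched as a small constrained-maximisation problem. The demand function below is a hypothetical log-linear form with made-up coefficients, and the seat-capacity cap is invented for illustration; it is not the thesis's estimated system:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical log-linear demand: ln D = A0 + A1*ln(freq) + A2*ln(size).
# Coefficients are illustrative, not the thesis's 3SLS/MLE estimates.
A0, A1, A2 = 2.0, 0.6, 0.3

def neg_demand(x):
    freq, size = x
    return -np.exp(A0 + A1 * np.log(freq) + A2 * np.log(size))

# Route capacity constraint: weekly seat supply (freq * size) is capped.
SEAT_CAP = 10_000
constraints = [{"type": "ineq", "fun": lambda x: SEAT_CAP - x[0] * x[1]}]
bounds = [(1.0, 200.0),    # flights per week
          (50.0, 400.0)]   # seats per aircraft

result = minimize(neg_demand, x0=[20.0, 150.0], bounds=bounds,
                  constraints=constraints, method="SLSQP")
freq_opt, size_opt = result.x
```

Because the illustrative frequency elasticity (A1) exceeds the aircraft-size elasticity (A2), the optimiser spends the seat budget on frequency first, which mirrors the general intuition, reported later in this abstract, that demand is served primarily through more flights rather than larger aircraft.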
This thesis also highlights the key drivers of capacity planning and demand modelling in the supply-demand equilibrium of the Australian domestic flight market, and the relationships among them. The results show a bilateral relationship between the elements of capacity decisions and passenger demand, and statistically differentiate airlines' capacity-planning policies across markets. They suggest that higher demand for flights primarily results in increased flight frequency rather than increased aircraft size or load factor; the load factor is found to be an insignificant variable in airlines' capacity planning. Competition between airlines, participation of low-cost carriers, and jet fuel expenses all appear to influence airlines' capacity decisions, albeit differently across the given markets. Interestingly, jet fuel cost inflation stimulates flight demand in the short-haul market as well as on routes linking major cities to industrial ones. The socio-economic parameters of population and employment rates affect flight demand differently across markets. The findings further indicate that airlines' capacity decisions influence flight delays: more frequent flights and larger aircraft together are associated with more flight delays, and additional flights cause route congestion, to a higher degree for low-cost carriers. Jet fuel cost inflation is also associated with flight delays, more so for legacy airlines than for low-cost carriers. For a given period, December 2015, the optimum solutions of the 3SLS and MLE models indicate, respectively, a 1.72% and a 0.66% improvement in flight demand compared with the airlines' reported actual plan.
The estimated MSE of the MLE model is smaller than that of 3SLS; however, the estimated 3SLS coefficients are statistically more significant than those of MLE, yielding more practical results in the optimisation stage. The proposed model and findings can be applied by airlines as well as policy makers to fleet planning and airport infrastructure development projects in different airports and hub-and-spoke networks across the globe. The optimisation model could be enhanced by using the theory of full equilibrium, adding factors for other transport modes. Due to data limitations, airfare was applied only as an exogenous parameter in the passenger demand equation of the optimisation model; it could be upgraded to a key variable of airline capacity planning under the supply-demand equilibrium. In future research, data for individual airlines could be applied separately at the route level; with an airline dimension in the modelling, airlines' capacity-planning policies and performance in different markets could be explored further. The proposed model can also be applied to other airports and hub-and-spoke networks across the globe, which would enable further exploration of airlines' capacity-planning policies and of demand modelling under the supply-demand equilibrium.

    Assessing 2 D-dimer age-adjustment strategies to optimize computed tomographic use in ED evaluation of pulmonary embolism

    STUDY OBJECTIVE: To validate the sensitivity and specificity of 2 age-adjustment strategies for D-dimer values in identifying patients at risk for pulmonary embolism (PE), compared with the traditional D-dimer cutoff value (500 ng/mL), to decrease inappropriate computed tomography pulmonary angiography (CTPA) use. METHODS: This institutional review board-approved, Health Insurance Portability and Accountability Act-compliant retrospective study included all adult emergency department patients evaluated for PE over a 32-month period (1/1/11-8/30/13). Only patients undergoing both CTPA and D-dimer testing were included. We used a validated natural language processing algorithm to parse CTPA radiology reports and determine the presence of acute PE. Outcome measures were the sensitivity and specificity of 2 age-adjusted D-dimer cutoffs compared with the traditional cutoff. We used chi-square tests with proportional analyses to assess differences between the traditional and age-adjusted (age x 10 ng/mL) D-dimer cutoffs, adjusting both by decade and by year. RESULTS: A total of 3063 patients with suspected PE were evaluated by CTPA during the study period, and 1055 (34%) also received D-dimer testing. The specificity of the age-adjusted D-dimer values was similar or higher for each age group studied compared with the traditional cutoff, without significantly compromising sensitivity. Overall, had decade age-adjusted cutoffs been used, 37 CTPAs could have been avoided (19.6% of 189 patients aged >60 years with Wells score50 years with Wells scor
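The two adjustment strategies can be expressed as a small helper function. The abstract specifies only "age x 10 ng/mL", adjusted by decade and by year; the exact rounding rule for the by-decade strategy and the 50-year eligibility threshold are assumptions in this sketch, following common age-adjustment practice rather than this study's protocol:

```python
def d_dimer_cutoff(age_years, strategy="year"):
    """Age-adjusted D-dimer cutoff (ng/mL) for suspected PE.

    Patients aged 50 or under keep the traditional 500 ng/mL cutoff
    (assumed threshold). For older patients:
      strategy="year":   cutoff = age * 10 ng/mL
      strategy="decade": cutoff stepped once per decade of age
                         (assumed rounding, e.g. 67 years -> 600 ng/mL)
    """
    if age_years <= 50:
        return 500
    if strategy == "year":
        return age_years * 10
    if strategy == "decade":
        return (age_years // 10) * 100
    raise ValueError(f"unknown strategy: {strategy}")
```

A higher cutoff for older patients raises specificity (fewer false-positive D-dimers sent to CTPA) at a small potential cost in sensitivity, which is exactly the trade-off the study quantifies.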

    The use of decision support to measure documented adherence to a national imaging quality measure

    RATIONALE AND OBJECTIVES: Present methods for measuring adherence to national imaging quality measures often require a resource-intensive chart review. Computerized decision support systems may allow for automated capture of these data. We sought to determine the feasibility of measuring adherence to a national quality measure (NQM) regarding computed tomography pulmonary angiograms (CTPAs) for pulmonary embolism using measure-targeted clinical decision support and whether the associated increased burden of data capture required by this system would affect the use and yield of CTPAs. MATERIALS AND METHODS: This institutional review board-approved prospective cohort study enrolled patients from September 1, 2009, through November 30, 2011, in the emergency department (ED) of a 776-bed quaternary-care adults-only academic medical center. Our intervention consisted of an NQM-targeted clinical decision support tool for CTPAs, which required mandatory input of the Wells criteria and serum D-dimer level. The primary outcome was the documented adherence to the quality measure prior and subsequent to the intervention, and the secondary outcomes were the use and yield of CTPAs. RESULTS: A total of 1209 patients with suspected PE (2.0% of 58,795 ED visits) were imaged by CTPA during the 12-month control period, and 1212 patients were imaged in the 12 months after the quarter during which the intervention was implemented (2.0% of 59,478 ED visits, P = .84). Documented baseline adherence to the NQM was 56.9% based on a structured review of the provider notes. After implementation, documented adherence increased to 75.6% (P < .01). CTPA yield remained unchanged and was 10.4% during the control period and 10.1% after the intervention (P = .88).
CONCLUSIONS: Implementation of a clinical decision support tool significantly improved documented adherence to an NQM, enabling automated measurement of provider adherence to evidence without the need for resource-intensive chart review. It did not adversely affect the use or yield of CTPAs.

    Excitonic Emission of Monolayer Semiconductors Near-Field Coupled to High-Q Microresonators.

    We present quantum yield measurements of single-layer WSe2 (1L-WSe2) integrated with high-Q (Q > 10^6) optical microdisk cavities, using an efficient (η > 90%) near-field coupling scheme based on a tapered optical fiber. Coupling of the excitonic emission is achieved by placing 1L-WSe2 in the evanescent cavity field. This preserves the microresonator's high intrinsic quality factor (Q > 10^6) below the bandgap of 1L-WSe2. The cavity quantum yield is QY_c ≈ 10^-3, consistent with operation in the broad-emitter regime (i.e., the emission lifetime of 1L-WSe2 is significantly shorter than the bare cavity decay time). This scheme can serve as a precise measurement tool for the excitonic emission of layered materials into cavity modes, for both in-plane and out-of-plane excitation.

    The requirements and challenges in preventing road traffic injury in Iran: A qualitative study

    BACKGROUND: Road traffic injuries (RTIs) are a major public health problem, especially in low- and middle-income countries. Among middle-income countries, Iran has one of the highest mortality rates from RTIs. Action is critical to combat this major public health problem. Stakeholders involved in RTI control are of key importance, and their perceptions of barriers and facilitators are a vital source of knowledge. The aim of this study was to explore barriers to the prevention of RTIs and provide appropriate suggestions for prevention, based on the perceptions of stakeholders, victims, and road users. METHODS: Thirty-eight semi-structured interviews were conducted with informants in the field of RTI prevention, including police officers; public health professionals; experts from the road administration; representatives from the General Governor, the car industry, and the firefighters; experts from the Emergency Medical Service and the Red Crescent; and motorcyclists, car drivers, and victims of RTIs. A qualitative approach using the grounded theory method was employed to analyse the material gathered. RESULTS: The core variable was identified as "the lack of a system approach to road-user safety". The following barriers to RTI prevention were identified: human factors; the transportation system; and organizational coordination. Suggestions for improvement included education (for the general public and targeted group training), more effective legislation, more rigorous law enforcement, improved engineering of road infrastructure, and an integrated organization to supervise and coordinate preventive activities. CONCLUSION: The major barriers identified in this study were human factors, and efforts to change human behaviour were suggested by means of public education campaigns and stricter law enforcement. However, the lack of a system approach to RTI prevention was also an important concern. There is an urgent need both for an integrated system to coordinate RTI prevention activities and for a major change in stakeholders' attitudes towards RTI prevention. All activities should focus on road users' safety.

    Exploring corrections to the optomechanical Hamiltonian

    We compare two approaches for deriving corrections to the “linear model” of cavity optomechanics, in order to describe effects beyond first order in the radiation pressure coupling. In the regime where the mechanical frequency is much lower than the cavity frequency, we compare: (I) a widely used phenomenological Hamiltonian conserving the photon number; (II) a two-mode truncation of C. K. Law’s microscopic model, which we take as the “true” system Hamiltonian. While these approaches agree at first order, the latter model does not conserve the photon number, resulting in challenging computations. We find that approach (I) allows for several analytical predictions and significantly outperforms the linear model in our numerical examples. Yet we also find that the phenomenological Hamiltonian cannot fully capture all high-order corrections arising from the C. K. Law model.
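For context, the radiation-pressure Hamiltonian around which such corrections are organised is conventionally written (a standard textbook form, not quoted from this paper) as

```latex
\frac{H}{\hbar} \;=\; \omega_c\, a^\dagger a \;+\; \omega_m\, b^\dagger b
\;-\; g_0\, a^\dagger a \left( b + b^\dagger \right),
```

where $a$ ($b$) annihilates a cavity photon (mechanical phonon), $\omega_c$ and $\omega_m$ are the cavity and mechanical frequencies, and $g_0$ is the single-photon optomechanical coupling. The "linear model" follows from linearising the interaction term around a strong intracavity field; the corrections compared in the abstract are contributions at higher orders in $g_0$.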

    Discerning Tumor Status from Unstructured MRI Reports—Completeness of Information in Existing Reports and Utility of Automated Natural Language Processing

    Information in electronic medical records is often in an unstructured free-text format. This format presents challenges for expedient data retrieval and may fail to convey important findings. Natural language processing (NLP) is an emerging technique for rapid and efficient clinical data retrieval. While proven in disease detection, the utility of NLP in discerning disease progression from free-text reports is untested. We aimed to (1) assess whether unstructured radiology reports contained sufficient information for tumor status classification; (2) develop an NLP-based data extraction tool to determine tumor status from unstructured reports; and (3) compare NLP and human tumor status classification outcomes. Consecutive follow-up brain tumor magnetic resonance imaging reports (2000–2007) from a tertiary center were manually annotated using consensus guidelines on tumor status. Reports were randomized to NLP training (70%) or testing (30%) groups. The NLP tool utilized a support vector machines model with statistical and rule-based outcomes. Most reports had sufficient information for tumor status classification, although 0.8% did not describe status despite reference to prior examinations. Tumor size was unreported in 68.7% of documents, while 50.3% lacked data on change magnitude when there was detectable progression or regression. Using retrospective human classification as the gold standard, NLP achieved 80.6% sensitivity and 91.6% specificity for tumor status determination (mean positive predictive value, 82.4%; negative predictive value, 92.0%). In conclusion, most reports contained sufficient information for tumor status determination, though variable features were used to describe status. NLP demonstrated good accuracy for tumor status classification and may have novel application for automated disease status classification from electronic databases.
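The reported accuracy figures are standard confusion-matrix statistics; they can be reproduced from raw counts with a minimal helper (the counts in the example below are invented for illustration, not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on truly positive cases
        "specificity": tn / (tn + fp),   # recall on truly negative cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

For example, 80 true positives, 10 false positives, 20 false negatives and 90 true negatives give 80% sensitivity and 90% specificity, the same shape of trade-off the study reports for the NLP classifier.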